-
Virtual reality (VR) systems for guiding remote physical tasks typically require object poses to be specified in absolute world coordinates. However, many of these tasks only need object poses to be specified relative to each other. Supporting only absolute pose specification therefore imposes unnecessary constraints and can make giving or following task guidance inefficient. We are developing a VR task-guidance system that avoids this by enabling relative 6DoF poses to be specified within subsets of objects. We describe our user interface, including how geometric relationships are specified and several ways in which they are visualized, and our plans for validating our approach against existing techniques.
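To make the distinction between absolute and relative pose specification concrete, the sketch below stores the 6DoF pose of one object in another object's coordinate frame using 4x4 homogeneous transforms. The function names are ours, not the paper's, and the paper's interface is not shown.

```python
# Minimal sketch (not the system described above): a relative 6DoF pose is the
# pose of object B expressed in object A's frame, so it stays valid even if
# the pair of objects is moved together in world coordinates.
import numpy as np

def relative_pose(T_world_a: np.ndarray, T_world_b: np.ndarray) -> np.ndarray:
    """Pose of B in A's frame: T_a_b = inv(T_world_a) @ T_world_b."""
    return np.linalg.inv(T_world_a) @ T_world_b

def resolve_world_pose(T_world_a: np.ndarray, T_a_b: np.ndarray) -> np.ndarray:
    """Recover B's world pose from A's world pose and the stored relative pose."""
    return T_world_a @ T_a_b
```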
-
Copies (proxies) of objects are useful for selecting and manipulating objects in virtual reality (VR). Temporary proxies are destroyed after use and must be recreated for reuse. Permanent proxies persist after use for easy reselection, but can cause clutter. To investigate the benefits and drawbacks of permanent and temporary proxies, we conducted a user study in which participants performed 6DoF tasks with proxies in the Voodoo Dolls technique. The study revealed that permanent proxies were more efficient for hard reselection tasks and were preferred by participants.
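To illustrate the proxy distinction, here is a hypothetical sketch (all names are ours, not from the study or the Voodoo Dolls implementation) in which manipulating a proxy mirrors its pose back to the original object, and only permanent proxies survive the end of an interaction.

```python
# Hypothetical proxy sketch: the proxy writes its pose back to the original
# object; temporary proxies are discarded after the interaction ends, while
# permanent ones remain available for reselection.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class Proxy:
    target: SceneObject
    permanent: bool
    position: tuple = (0.0, 0.0, 0.0)

    def manipulate(self, new_position):
        """Moving the proxy mirrors the transformation onto the original object."""
        self.position = new_position
        self.target.position = new_position

def end_interaction(proxies):
    """Temporary proxies are destroyed after use; permanent ones persist."""
    return [p for p in proxies if p.permanent]
```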
-
Entity–Component–System (ECS) architectures are fundamental to many systems for developing extended reality (XR) applications. These applications often contain complex scenes and require intricate application logic to connect components, making debugging and analysis difficult. Graph-based tools have been created to show actions in ECS-based scene hierarchies, but few address interactions that go beyond traditional hierarchical communication. To address this, we present an XR GUI for Mercury (a toolkit for cross-component ECS communication) that allows developers to view and edit relationships and interactions between scene entities in Mercury.
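Mercury's API is not described in the abstract; the following is only a generic, hypothetical ECS sketch showing the kind of cross-entity relationship, outside the scene hierarchy, that such a graph view would visualize. All identifiers are invented.

```python
# Hypothetical sketch of cross-component communication in a minimal ECS.
# None of these names come from Mercury; they only illustrate relationships
# between entities that bypass hierarchical parent-child communication.
from collections import defaultdict

class World:
    def __init__(self):
        self.components = defaultdict(dict)   # component type -> {entity id: data}
        self.links = defaultdict(list)        # entity id -> [(target id, relation)]

    def add_component(self, entity, kind, data):
        self.components[kind][entity] = data

    def link(self, source, target, relation):
        """Record a cross-entity relationship, e.g. a button that toggles a light."""
        self.links[source].append((target, relation))

    def relationships(self, entity):
        """Query the interaction graph, e.g. for a debugging or graph view."""
        return self.links[entity]

world = World()
world.add_component("button1", "Pressable", {"pressed": False})
world.add_component("lamp1", "Light", {"on": False})
world.link("button1", "lamp1", "toggles")
print(world.relationships("button1"))   # [('lamp1', 'toggles')]
```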
-
When collaborating relative to a shared 3D virtual object in mixed reality (MR), users may experience communication issues arising from differences in perspective. These issues include occlusion (e.g., one user not being able to see what the other is referring to) and inefficient spatial references (e.g., “to the left of this” may be confusing when users are positioned opposite to each other). This paper presents a novel technique for automatic perspective alignment in collaborative MR involving co-located interaction centered around a shared virtual object. To align one user’s perspective on the object with a collaborator’s, a local copy of the object and any other virtual elements that reference it (e.g., the collaborator’s hands) are dynamically transformed. The technique does not require virtual travel and preserves face-to-face interaction. We created a prototype application to demonstrate our technique and present an evaluation methodology for related MR collaboration and perspective alignment scenarios.
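The abstract does not give the exact transform, but one plausible reading is a rotation of the local copy (and any elements that reference it, such as the collaborator's hand representations) about the shared object's center. Below is a rough yaw-only sketch under that assumption, with invented names; it is not the paper's implementation.

```python
# Hedged sketch of yaw-only perspective alignment about a shared object's
# center: rotate the local copy and attached elements so the local user sees
# the same side of the object that the remote collaborator sees.
import numpy as np

def yaw_alignment_angle(object_pos, local_user_pos, remote_user_pos):
    """Yaw (about the world up axis) that rotates the remote collaborator's
    bearing toward the object onto the local user's, in the horizontal plane."""
    def bearing(p):
        d = np.asarray(p, float) - np.asarray(object_pos, float)
        return np.arctan2(d[0], d[2])
    return bearing(local_user_pos) - bearing(remote_user_pos)

def align_point(point, object_pos, angle):
    """Rotate a point (e.g. a vertex of the copy, or the collaborator's hand
    position) about the object's center by the alignment angle."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    object_pos = np.asarray(object_pos, float)
    return R @ (np.asarray(point, float) - object_pos) + object_pos
```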
-
Augmented reality (AR) has been used to guide users in multi-step tasks, providing information about the current step (cueing) or future steps (precueing). However, existing work exploring cueing and precueing a series of rigid-body transformations requiring rotation has only examined one-degree-of-freedom (DoF) rotations, either alone or in conjunction with 3DoF translations. In contrast, we address sequential tasks involving 3DoF rotations and 3DoF translations. We built a testbed to compare two types of visualizations for cueing and precueing steps. In each step, a user picks up an object, rotates it in 3D while translating it in 3D, and deposits it in a target 6DoF pose. Action-based visualizations show the actions needed to carry out a step, while goal-based visualizations show the desired end state of a step. We conducted a user study to evaluate these visualizations and the efficacy of precueing. Participants performed better with goal-based visualizations than with action-based visualizations, and most effectively with goal-based visualizations aligned with the Euler axis. However, only a few of our participants benefited from precues, most likely because of the cognitive load of 3D rotations.
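For reference, the Euler axis mentioned above is the single rotation axis that carries the current orientation to the goal orientation. A short sketch of how it could be computed, using SciPy (which the paper does not necessarily use):

```python
# Extract the Euler axis and angle of the rotation from an object's current
# orientation to its goal orientation. The visualization itself is not shown.
import numpy as np
from scipy.spatial.transform import Rotation as R

def euler_axis_and_angle(current: R, goal: R):
    """The relative rotation carries the current orientation to the goal.
    Its rotation vector points along the Euler axis; its norm is the angle."""
    delta = goal * current.inv()
    rotvec = delta.as_rotvec()
    angle = float(np.linalg.norm(rotvec))
    axis = rotvec / angle if angle > 1e-9 else np.array([0.0, 0.0, 1.0])
    return axis, angle

# Example: a 90-degree rotation about z from identity has Euler axis (0, 0, 1).
axis, angle = euler_axis_and_angle(R.identity(), R.from_euler("z", 90, degrees=True))
```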
-
We present a prototype virtual reality user interface for robot teleoperation that supports high-level specification of 3D object positions and orientations in remote assembly tasks. Users interact with virtual replicas of task objects. They asynchronously assign multiple goals in the form of 6DoF destination poses without needing to be familiar with specific robots and their capabilities, and manage and monitor the execution of these goals. The user interface employs two different spatiotemporal visualizations for assigned goals: one represents all goals within the user’s workspace (Aggregated View), while the other depicts each goal within a separate world in miniature (Timeline View). We conducted a user study of the interface without the robot system to compare how these visualizations affect user efficiency and task load. The results show that while the Aggregated View helped the participants finish the task faster, the participants preferred the Timeline View.
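As a purely illustrative sketch of the kind of data such an interface manages (the field and status names are assumptions, not the paper's), each assigned goal pairs a task object with a 6DoF destination pose and an execution status that the user can monitor:

```python
# Hypothetical goal queue for asynchronously assigned 6DoF destination poses.
from dataclasses import dataclass, field
from enum import Enum, auto

class GoalStatus(Enum):
    PENDING = auto()
    EXECUTING = auto()
    DONE = auto()
    FAILED = auto()

@dataclass
class AssemblyGoal:
    object_id: str
    position: tuple       # (x, y, z) destination
    orientation: tuple    # quaternion (x, y, z, w) destination
    status: GoalStatus = GoalStatus.PENDING

@dataclass
class GoalQueue:
    goals: list = field(default_factory=list)

    def assign(self, goal: AssemblyGoal):
        self.goals.append(goal)   # user keeps assigning while execution proceeds

    def monitor(self):
        return [(g.object_id, g.status.name) for g in self.goals]
```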
-
In virtual reality (VR) teleoperation and remote task guidance, a remote user may need to assign tasks to local technicians or robots at multiple sites. We are interested in scenarios where the user works with one site at a time, but must maintain awareness of the other sites for future intervention. We present an instrumented VR testbed for exploring how different spatial layouts of site representations impact user performance. In addition, we investigate ways of supporting the remote user in handling errors and interruptions from sites other than the one with which they are currently working, and switching between sites. We conducted a pilot study and explored how these factors affect user performance.
-
We explore Spatial Augmented Reality (SAR) precues (predictive cues) for procedural tasks within and between workspaces, and for visualizing multiple upcoming steps in advance. We designed precues based on several factors: cue type, color transparency, and multi-level precueing (the number of precues shown). Precues were evaluated in a procedural task requiring the user to press buttons in three surrounding workspaces. Participants performed fastest in conditions where tasks were linked by line cues with different levels of color transparency. Precue performance was also affected by whether the next task was in the same workspace or a different one.
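A hypothetical encoding of those design factors, with invented names and values, shown only to make the factor space concrete:

```python
# Illustrative precue condition: cue type, graded color transparency, and the
# number of upcoming steps precued at once. Not the study's actual parameters.
from dataclasses import dataclass
from enum import Enum, auto

class CueType(Enum):
    LINE = auto()
    ARROW = auto()

@dataclass
class PrecueCondition:
    cue_type: CueType
    graded_transparency: bool   # vary transparency with how far ahead a step is
    levels: int                 # how many upcoming steps are precued

condition = PrecueCondition(CueType.LINE, graded_transparency=True, levels=3)
```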
-
Built to Order: A Virtual Reality Interface for Assigning High-Level Assembly Goals to Remote Robots
Many real-world factory tasks require human expertise and involvement for robot control. However, traditional robot operation requires that users undergo extensive and time-consuming robot-specific training to understand the constraints of each robot. We describe a user interface that supports a user in assigning and monitoring remote assembly tasks in Virtual Reality (VR) through high-level goal-based instructions rather than low-level direct control. Our user interface is part of a testbed in which a motion-planning algorithm determines, verifies, and executes robot-specific trajectories in simulation.
